Xiaoqi Li

CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation

May 04, 2025

3DWG: 3D Weakly Supervised Visual Grounding via Category and Instance-Level Alignment

May 03, 2025

A Comprehensive Study of Exploitable Patterns in Smart Contracts: From Vulnerability to Defense

Apr 30, 2025

AI-Based Vulnerability Analysis of NFT Smart Contracts

Apr 24, 2025

Mining Characteristics of Vulnerable Smart Contracts Across Lifecycle Stages

Apr 21, 2025

HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model

Mar 13, 2025

SCALM: Detecting Bad Practices in Smart Contracts Through LLMs

Feb 04, 2025

ManipGPT: Is Affordance Segmentation by Large Vision Models Enough for Articulated Object Manipulation?

Dec 13, 2024

Human-centered In-building Embodied Delivery Benchmark

Jun 25, 2024

SpatialBot: Precise Spatial Understanding with Vision Language Models

Jun 19, 2024